Results 1 - 20 of 32
1.
Br J Ophthalmol ; 108(3): 432-439, 2024 02 21.
Article in English | MEDLINE | ID: mdl-36596660

ABSTRACT

BACKGROUND: Optical coherence tomography angiography (OCTA) enables fast and non-invasive high-resolution imaging of retinal microvasculature and has been suggested as a potential tool for the early detection of retinal microvascular changes in Alzheimer's Disease (AD). We developed a standardised OCTA analysis framework and compared the extracted parameters among controls, AD and mild cognitive impairment (MCI) subjects in a cross-sectional study. METHODS: We defined and extracted geometrical parameters of the retinal microvasculature at different retinal layers and in the foveal avascular zone (FAZ) from segmented OCTA images obtained using well-validated state-of-the-art deep learning models. We studied these parameters in 158 subjects (62 healthy controls, 55 AD and 41 MCI) using logistic regression to determine their potential for predicting subject status. RESULTS: In the AD group, there was a significant decrease in vessel area and length densities in the inner vascular complexes (IVC) compared with controls. The number of vascular bifurcations in AD was also significantly lower than in controls. The MCI group demonstrated decreases in vascular area and length densities, vascular fractal dimension and the number of bifurcations in both the superficial vascular complexes (SVC) and the IVC compared with controls. A larger vascular tortuosity in the IVC, and a larger roundness of the FAZ in the SVC, were also observed in MCI compared with controls. CONCLUSION: Our study demonstrates the applicability of OCTA for the diagnosis of AD and MCI, and provides a standard tool for future clinical service and research. Biomarkers from retinal OCTA images can provide useful information for clinical decision-making and the diagnosis of AD and MCI.
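Several of the geometrical parameters above (vessel area density, vessel length density, bifurcation count) can be computed directly from a binary vessel mask and its skeleton. The sketch below is an illustrative simplification following common OCTA practice, not the paper's framework; the function name and the 4-connectivity rule are assumptions.

```python
import numpy as np

def vessel_metrics(mask, skeleton):
    """Simple OCTA-style vascular parameters from a binary vessel mask
    and its one-pixel-wide skeleton (both 2D boolean arrays)."""
    area_density = mask.sum() / mask.size        # fraction of vessel pixels
    length_density = skeleton.sum() / mask.size  # skeleton pixels per image pixel
    # A bifurcation is a skeleton pixel with >= 3 skeleton neighbours
    # (4-connectivity keeps a simple plus-junction to a single count).
    s = np.pad(skeleton.astype(int), 1)
    neigh = (np.roll(s, 1, 0) + np.roll(s, -1, 0)
             + np.roll(s, 1, 1) + np.roll(s, -1, 1))[1:-1, 1:-1]
    bifurcations = int(((neigh >= 3) & skeleton).sum())
    return area_density, length_density, bifurcations
```

On a synthetic plus-shaped skeleton only the crossing pixel has three or more neighbours, so exactly one bifurcation is counted.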


Subject(s)
Alzheimer Disease, Cognitive Dysfunction, Humans, Fluorescein Angiography/methods, Retinal Vessels/diagnostic imaging, Optical Coherence Tomography/methods, Alzheimer Disease/diagnostic imaging, Microvessels/diagnostic imaging, Cognitive Dysfunction/diagnostic imaging
2.
Sensors (Basel) ; 23(20)2023 Oct 23.
Article in English | MEDLINE | ID: mdl-37896742

ABSTRACT

With the advent of autonomous vehicles, sensors and algorithm testing have become crucial parts of the autonomous vehicle development cycle. Having access to real-world sensors and vehicles is a dream for researchers and small-scale original equipment manufacturers (OEMs) due to the software and hardware development life-cycle duration and high costs. Therefore, simulator-based virtual testing has gained traction over the years as the preferred testing method due to its low cost, efficiency, and effectiveness in executing a wide range of testing scenarios. Companies like ANSYS and NVIDIA have come up with robust simulators, and open-source simulators such as CARLA have also populated the market. However, there is a lack of lightweight and simple simulators catering to specific test cases. In this paper, we introduce the SLAV-Sim, a lightweight simulator that specifically trains the behaviour of a self-learning autonomous vehicle. This simulator has been created using the Unity engine and provides an end-to-end virtual testing framework for different reinforcement learning (RL) algorithms in a variety of scenarios using camera sensors and raycasts.
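The raycast sensing the simulator relies on can be pictured as marching a ray through an occupancy map until it hits an obstacle. The actual SLAV-Sim uses Unity's physics raycasts; the 2D grid world and function below are purely hypothetical stand-ins.

```python
import math

def raycast(grid, x, y, angle, max_dist=10.0, step=0.05):
    """March a ray from (x, y) through a 2D occupancy grid
    (list of strings, '#' = wall) and return the distance to the
    first wall, capped at max_dist."""
    d = 0.0
    while d < max_dist:
        px = x + d * math.cos(angle)
        py = y + d * math.sin(angle)
        if grid[int(py)][int(px)] == '#':
            return d
        d += step
    return max_dist
```

An RL agent's observation vector can then be a handful of such distances cast at fixed angles around its heading, alongside the camera input.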

3.
Med Phys ; 50(12): 7654-7669, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37278312

ABSTRACT

BACKGROUND: Various types of noise artifacts inevitably exist in some medical imaging modalities due to limitations of imaging techniques, impairing either clinical diagnosis or subsequent analysis. Recently, deep learning approaches have been rapidly developed and applied to medical images for noise removal or image quality enhancement. Nevertheless, due to the complexity and diversity of noise distributions in different medical imaging modalities, most existing deep learning frameworks are incapable of flexibly removing noise artifacts while retaining detailed information. As a result, it remains challenging to design an effective and unified medical image denoising method that works across a variety of noise artifacts in different imaging modalities without requiring specialized knowledge of the task. PURPOSE: In this paper, we propose a novel encoder-decoder architecture called the Swin transformer-based residual u-shape Network (StruNet) for medical image denoising. METHODS: Our StruNet adopts a well-designed block as the backbone of the encoder-decoder architecture, which integrates Swin Transformer modules with a residual block in a parallel connection. The Swin Transformer modules effectively learn hierarchical representations of noise artifacts via a self-attention mechanism in non-overlapping shifted windows and cross-window connections, while the residual block compensates for the loss of detailed information via a shortcut connection. Furthermore, a perceptual loss and a low-rank regularization are incorporated into the loss function to constrain the denoising results with respect to feature-level consistency and low-rank characteristics, respectively. RESULTS: To evaluate the performance of the proposed method, we conducted experiments on three medical imaging modalities: computed tomography (CT), optical coherence tomography (OCT) and optical coherence tomography angiography (OCTA).
CONCLUSIONS: The results demonstrate that the proposed architecture yields a promising performance of suppressing multiform noise artifacts existing in different imaging modalities.
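Low-rank regularization of an image is commonly implemented as a nuclear-norm (sum of singular values) penalty. The snippet below is a generic sketch of that idea only; it is an assumption that StruNet's regularizer takes this exact form.

```python
import numpy as np

def nuclear_norm(x):
    """Sum of singular values of a 2D array - a convex surrogate for
    rank, often used to encourage low-rank structure in an image."""
    return np.linalg.svd(x, compute_uv=False).sum()
```

A rank-1 image has a single non-zero singular value, so its nuclear norm equals that value; adding independent noise raises the penalty, which is what pushes a denoiser toward smoother, lower-rank outputs.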


Subject(s)
Delayed Emergence from Anesthesia, Humans, Angiography, Image Enhancement, Optical Coherence Tomography, X-Ray Computed Tomography, Computer-Assisted Image Processing, Signal-To-Noise Ratio
4.
IEEE Trans Pattern Anal Mach Intell ; 45(11): 13083-13099, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37335789

ABSTRACT

While 3D visual saliency aims to predict regional importance of 3D surfaces in agreement with human visual perception and has been well researched in computer vision and graphics, latest work with eye-tracking experiments shows that state-of-the-art 3D visual saliency methods remain poor at predicting human fixations. Cues emerging prominently from these experiments suggest that 3D visual saliency might associate with 2D image saliency. This paper proposes a framework that combines a Generative Adversarial Network and a Conditional Random Field for learning visual saliency of both a single 3D object and a scene composed of multiple 3D objects with image saliency ground truth to 1) investigate whether 3D visual saliency is an independent perceptual measure or just a derivative of image saliency and 2) provide a weakly supervised method for more accurately predicting 3D visual saliency. Through extensive experiments, we not only demonstrate that our method significantly outperforms the state-of-the-art approaches, but also manage to answer the interesting and worthy question proposed within the title of this paper.

5.
Article in English | MEDLINE | ID: mdl-36103441

ABSTRACT

Over the past few years, significant progress has been made in deep convolutional neural network (CNN)-based image recognition, mainly due to the strong ability of such networks to mine discriminative object pose and part information from texture and shape. This is often insufficient for fine-grained visual classification (FGVC), which exhibits high intra-class and low inter-class variance due to occlusions, deformation, illumination changes, etc. Thus, an expressive feature representation describing global structural information is key to characterizing an object/scene. To this end, we propose a method that effectively captures subtle changes by aggregating context-aware features from the most relevant image regions and their importance in discriminating fine-grained categories, without requiring bounding-box or distinguishable-part annotations. Our approach is inspired by recent advances in self-attention and graph neural networks (GNNs): it includes a simple yet effective relation-aware feature transformation, refined by a context-aware attention mechanism to boost the discriminability of the transformed features in an end-to-end learning process. Our model is evaluated on eight benchmark datasets consisting of fine-grained objects and human-object interactions, and it outperforms the state-of-the-art approaches in recognition accuracy by a significant margin.

6.
IEEE Trans Med Imaging ; 41(12): 3969-3980, 2022 12.
Article in English | MEDLINE | ID: mdl-36044489

ABSTRACT

Automated detection of retinal structures, such as retinal vessels (RV), the foveal avascular zone (FAZ), and retinal vascular junctions (RVJ), is of great importance for understanding diseases of the eye and for clinical decision-making. In this paper, we propose a novel Voting-based Adaptive Feature Fusion multi-task network (VAFF-Net) for joint segmentation, detection, and classification of RV, FAZ, and RVJ in optical coherence tomography angiography (OCTA). A task-specific voting gate module is proposed to adaptively extract and fuse different features for specific tasks at two levels: features at different spatial positions from a single encoder, and features from multiple encoders. In particular, since the complexity of the microvasculature in OCTA images makes simultaneous precise localization and classification of retinal vascular junctions into bifurcations/crossings a challenging task, we specifically design a task head combining heatmap regression and grid classification. We take advantage of three different en face angiograms from various retinal layers, rather than following existing methods that use only a single en face image. We carry out extensive experiments on three OCTA datasets acquired using different imaging devices, and the results demonstrate that the proposed method performs better overall than either state-of-the-art single-purpose methods or existing multi-task learning solutions. We also demonstrate that our multi-task learning method generalizes to other imaging modalities, such as color fundus photography, and may potentially be used as a general multi-task learning tool. We also construct three datasets for multiple structure detection, and part of these datasets, together with the source code and evaluation benchmark, has been released for public access.


Subject(s)
Retinal Vessels, Optical Coherence Tomography, Optical Coherence Tomography/methods, Fluorescein Angiography/methods, Retinal Vessels/diagnostic imaging, Fundus Oculi, Retina/diagnostic imaging
7.
IEEE Trans Med Imaging ; 41(8): 2079-2091, 2022 08.
Article in English | MEDLINE | ID: mdl-35245193

ABSTRACT

Accurate estimation and quantification of corneal nerve fiber tortuosity in corneal confocal microscopy (CCM) is of great importance for disease understanding and clinical decision-making. However, the grading of corneal nerve tortuosity remains a great challenge due to the lack of agreement on the definition and quantification of tortuosity. In this paper, we propose a fully automated deep learning method that performs image-level tortuosity grading of corneal nerves, based on both CCM images and segmented corneal nerves, to further improve grading accuracy while following interpretability principles. The proposed method consists of two stages: 1) A feature extraction backbone pre-trained on ImageNet is fine-tuned with a novel bilinear attention (BA) module to predict the regions of interest (ROIs) and a coarse grading of the image. The BA module enhances the network's ability to model long-range dependencies and global contexts of nerve fibers by capturing second-order statistics of high-level features. 2) An auxiliary tortuosity grading network (AuxNet) obtains an auxiliary grading over the identified ROIs, so that the coarse and auxiliary gradings can be fused for a more accurate final result. The experimental results show that our method surpasses existing methods in tortuosity grading, achieving an overall accuracy of 85.64% in four-level classification. We also validate it on a clinical dataset, where statistical analysis demonstrates a significant difference in tortuosity levels between the healthy control and diabetes groups. We have released a dataset of 1500 CCM images with manual annotations of four tortuosity levels for public access. The code is available at: https://github.com/iMED-Lab/TortuosityGrading.
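The second-order statistics the bilinear attention module captures can be illustrated with classic bilinear pooling over feature channels. This is a generic sketch: the signed square root and L2 normalisation are common practice in bilinear pooling, not necessarily the paper's exact recipe.

```python
import numpy as np

def bilinear_pool(feat):
    """feat: (C, N) features over N spatial positions.
    Returns a normalised C x C matrix of second-order statistics."""
    g = feat @ feat.T / feat.shape[1]          # channel co-activation statistics
    g = np.sign(g) * np.sqrt(np.abs(g))        # signed square root
    return g / (np.linalg.norm(g) + 1e-12)     # L2 normalisation
```

The resulting C x C matrix encodes how pairs of channels co-activate across the image, which is what lets the module model global context beyond first-order pooling.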


Subject(s)
Deep Learning, Cornea/diagnostic imaging, Confocal Microscopy/methods, Nerve Fibers
8.
Med Image Anal ; 75: 102217, 2022 01.
Article in English | MEDLINE | ID: mdl-34775280

ABSTRACT

Parapneumonic effusion (PPE) is a common condition that causes death in patients hospitalized with pneumonia. Rapid distinction of complicated PPE (CPPE) from uncomplicated PPE (UPPE) in Computed Tomography (CT) scans is of great importance for the management and medical treatment of PPE. However, UPPE and CPPE display similar appearances in CT scans, and it is challenging to distinguish CPPE from UPPE via a single 2D CT image, whether attempted by a human expert, or by any of the existing disease classification approaches. 3D convolutional neural networks (CNNs) can utilize the entire 3D volume for classification: however, they typically suffer from the intrinsic defect of over-fitting. Therefore, it is important to develop a method that not only overcomes the heavy memory and computational requirements of 3D CNNs, but also leverages the 3D information. In this paper, we propose an uncertainty-guided graph attention network (UG-GAT) that can automatically extract and integrate information from all CT slices in a 3D volume for classification into UPPE, CPPE, and normal control cases. Specifically, we frame the distinction of different cases as a graph classification problem. Each individual is represented as a directed graph with a topological structure, where vertices represent the image features of slices, and edges encode the spatial relationship between them. To estimate the contribution of each slice, we first extract the slice representations with uncertainty, using a Bayesian CNN: we then make use of the uncertainty information to weight each slice during the graph prediction phase in order to enable more reliable decision-making. We construct a dataset consisting of 302 chest CT volumetric data from different subjects (99 UPPE, 99 CPPE and 104 normal control cases) in this study, and to the best of our knowledge, this is the first attempt to classify UPPE, CPPE and normal cases using a deep learning method. 
Extensive experiments show that our approach is lightweight in its computational demands, and outperforms accepted state-of-the-art methods by a large margin. Code is available at https://github.com/iMED-Lab/UG-GAT.
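The uncertainty-guided weighting of slices can be pictured as inverse-variance weighting of per-slice features. This is a minimal sketch under stated assumptions: the helper name and the plain inverse-variance rule are illustrative, whereas the paper applies its uncertainty weights inside a graph attention network.

```python
import numpy as np

def uncertainty_weighted_pool(slice_feats, variances):
    """Weight each slice feature by its inverse predictive variance,
    so uncertain slices contribute less to the volume-level decision."""
    w = 1.0 / (np.asarray(variances, float) + 1e-8)
    w /= w.sum()
    return (np.asarray(slice_feats, float) * w[:, None]).sum(axis=0)
```

A slice whose Bayesian CNN output is highly uncertain is effectively ignored, which is the intuition behind the more reliable volume-level decision-making described above.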


Subject(s)
Pleural Effusion, Pneumonia, Bayes Theorem, Differential Diagnosis, Humans, Pleural Effusion/diagnosis, Pneumonia/diagnosis, Uncertainty
9.
J Mol Graph Model ; 111: 108103, 2022 03.
Article in English | MEDLINE | ID: mdl-34959149

ABSTRACT

Proteins are essential to nearly all cellular mechanisms and are the effectors of the cell's activities. As such, they often interact through their surfaces with other proteins or with other cellular ligands such as ions or organic molecules. Evolution generates many different proteins with unique abilities, but also proteins with related functions and hence similar 3D surface properties (shape, physico-chemical properties, etc.). Protein surfaces are therefore of primary importance for their activity. In the present work, we assess the ability of different methods to detect such similarities based on the geometry of protein surfaces (described as 3D meshes), using either their shape alone, or their shape together with the electrostatic potential (a biologically relevant property of protein surfaces). Five groups participated in this contest using the shape-only dataset, and one group extended its pre-existing method to handle the electrostatic potential. Our comparative study reveals both the ability of the methods to detect related proteins and their difficulty in distinguishing between highly related proteins. Our study also allows us to analyse the putative influence of adding electrostatic information to protein shape alone. Finally, the discussion compares these results with those obtained by the extended method in previous contests. The source code of each presented method has been made available online.


Subject(s)
Proteins, Ligands, Molecular Models, Protein Domains, Static Electricity
10.
IEEE Trans Med Imaging ; 40(12): 3955-3967, 2021 12.
Article in English | MEDLINE | ID: mdl-34339369

ABSTRACT

The development of medical imaging techniques has greatly supported clinical decision-making. However, poor imaging quality, such as non-uniform illumination or imbalanced intensity, poses challenges for the automated screening, analysis and diagnosis of diseases. Previously, bi-directional GANs (e.g., CycleGAN) have been proposed to improve the quality of input images without requiring paired images. However, these methods focus on global appearance, without imposing constraints on structure or illumination, which are essential features for medical image interpretation. In this paper, we propose a novel and versatile bi-directional GAN, named Structure and illumination constrained GAN (StillGAN), for medical image quality enhancement. Our StillGAN treats low- and high-quality images as two distinct domains, and introduces local structure and illumination constraints for learning both overall characteristics and local details. Extensive experiments on three medical image datasets (corneal confocal microscopy, retinal color fundus and endoscopy images) demonstrate that our method performs better than both conventional methods and other deep learning-based methods. In addition, we have investigated the impact of the proposed method on different medical image analysis and clinical tasks, such as nerve segmentation, tortuosity grading, fovea localization and disease classification.


Subject(s)
Computer-Assisted Image Processing, Lighting, Fundus Oculi, Image Enhancement, Confocal Microscopy
11.
Front Plant Sci ; 12: 608732, 2021.
Article in English | MEDLINE | ID: mdl-33841454

ABSTRACT

The 3D analysis of plants has become increasingly effective in modeling the relative structure of organs and other traits of interest. In this paper, we introduce a novel pattern-based deep neural network, Pattern-Net, for segmentation of point clouds of wheat. This study is the first to segment the point clouds of wheat into defined organs and to analyse their traits directly in 3D space. Point clouds have no regular grid and thus their segmentation is challenging. Pattern-Net creates a dynamic link among neighbors to seek stable patterns from a 3D point set across several levels of abstraction using the K-nearest neighbor algorithm. To this end, different layers are connected to each other to create complex patterns from the simple ones, strengthen dynamic link propagation, alleviate the vanishing-gradient problem, encourage link reuse and substantially reduce the number of parameters. The proposed deep network is capable of analysing and decomposing unstructured complex point clouds into semantically meaningful parts. Experiments on a wheat dataset verify the effectiveness of our approach for segmentation of wheat in 3D space.
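The dynamic links Pattern-Net builds among neighbours start from a K-nearest-neighbour query over the raw points. A brute-force version is easy to write; this is illustrative only, since the network recomputes links at several levels of abstraction rather than once.

```python
import numpy as np

def knn_indices(points, k):
    """Brute-force K nearest neighbours for an (N, 3) point cloud.
    Returns an (N, k) index array, excluding each point itself."""
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)     # (N, N) pairwise distances
    np.fill_diagonal(dist, np.inf)           # a point is not its own neighbour
    return np.argsort(dist, axis=1)[:, :k]
```

Because point clouds have no regular grid, such neighbourhood indices play the role that pixel adjacency plays in image CNNs.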

12.
IEEE Trans Image Process ; 30: 3691-3704, 2021.
Article in English | MEDLINE | ID: mdl-33705316

ABSTRACT

This article presents a novel keypoints-based attention mechanism for visual recognition in still images. Deep Convolutional Neural Networks (CNNs) for recognizing images with distinctive classes have shown great success, but their performance in discriminating fine-grained changes is not at the same level. We address this by proposing an end-to-end CNN model, which learns meaningful features linking fine-grained changes using our novel attention mechanism. It captures the spatial structures in images by identifying semantic regions (SRs) and their spatial distributions, and is proved to be the key to modeling subtle changes in images. We automatically identify these SRs by grouping the detected keypoints in a given image. The "usefulness" of these SRs for image recognition is measured using our innovative attentional mechanism focusing on parts of the image that are most relevant to a given task. This framework applies to traditional and fine-grained image recognition tasks and does not require manually annotated regions (e.g. bounding-box of body parts, objects, etc.) for learning and prediction. Moreover, the proposed keypoints-driven attention mechanism can be easily integrated into the existing CNN models. The framework is evaluated on six diverse benchmark datasets. The model outperforms the state-of-the-art approaches by a considerable margin using Distracted Driver V1 (Acc: 3.39%), Distracted Driver V2 (Acc: 6.58%), Stanford-40 Actions (mAP: 2.15%), People Playing Musical Instruments (mAP: 16.05%), Food-101 (Acc: 6.30%) and Caltech-256 (Acc: 2.59%) datasets.


Subject(s)
Deep Learning, Human Activities/classification, Computer-Assisted Image Processing/methods, Female, Humans, Male, Semantics
13.
IEEE Trans Vis Comput Graph ; 27(1): 151-164, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31329121

ABSTRACT

Recently, effort has been made to apply deep learning to the detection of mesh saliency. However, one major barrier is to collect a large amount of vertex-level annotation as saliency ground truth for training the neural networks. Quite a few pilot studies showed that this task is difficult. In this work, we solve this problem by developing a novel network trained in a weakly supervised manner. The training is end-to-end and does not require any saliency ground truth but only the class membership of meshes. Our Classification-for-Saliency CNN (CfS-CNN) employs a multi-view setup and contains a newly designed two-channel structure which integrates view-based features of both classification and saliency. It essentially transfers knowledge from 3D object classification to mesh saliency. Our approach significantly outperforms the existing state-of-the-art methods according to extensive experimental results. Also, the CfS-CNN can be directly used for scene saliency. We showcase two novel applications based on scene saliency to demonstrate its utility.

14.
Med Image Anal ; 67: 101874, 2021 01.
Article in English | MEDLINE | ID: mdl-33166771

ABSTRACT

Automated detection of curvilinear structures, e.g., blood vessels or nerve fibres, from medical and biomedical images is a crucial early step in the automatic image interpretation associated with the management of many diseases. Precise measurement of the morphological changes of these curvilinear organ structures informs clinicians in understanding the mechanism, diagnosis, and treatment of, e.g., cardiovascular, kidney, eye, lung, and neurological conditions. In this work, we propose a generic and unified convolutional neural network for the segmentation of curvilinear structures and demonstrate it in several 2D/3D medical imaging modalities. We introduce a new curvilinear structure segmentation network (CS2-Net), which includes a self-attention mechanism in the encoder and decoder to learn rich hierarchical representations of curvilinear structures. Two types of attention modules - spatial attention and channel attention - are utilized to enhance inter-class discrimination and intra-class responsiveness, and to adaptively integrate local features with their global dependencies and normalization. Furthermore, to facilitate the segmentation of curvilinear structures in medical images, we employ 1×3 and 3×1 convolutional kernels to capture boundary features. We also extend the 2D attention mechanism to 3D to enhance the network's ability to aggregate depth information across different layers/slices. The proposed curvilinear structure segmentation network is thoroughly validated using both 2D and 3D images across six different imaging modalities. Experimental results across nine datasets show that the proposed method generally outperforms other state-of-the-art algorithms in various metrics.


Subject(s)
Deep Learning, Algorithms, Humans, Computer-Assisted Image Processing, Three-Dimensional Imaging, Neural Networks (Computer)
15.
IEEE Trans Image Process ; 30: 1153-1168, 2021.
Article in English | MEDLINE | ID: mdl-33306465

ABSTRACT

Scale-invariance, good localization and robustness to noise and distortions are the main properties that a local feature detector should possess. Most existing local feature detectors find excessive unstable feature points that increase the number of keypoints to be matched and the computational time of the matching step. In this paper, we show that robust and accurate keypoints exist in the specific scale-space domain. To this end, we first formulate the superimposition problem into a mathematical model and then derive a closed-form solution for multiscale analysis. The model is formulated via difference-of-Gaussian (DoG) kernels in the continuous scale-space domain, and it is proved that setting the scale-space pyramid's blurring ratio and smoothness to 2 and 0.627, respectively, facilitates the detection of reliable keypoints. For the applicability of the proposed model to discrete images, we discretize it using the undecimated wavelet transform and the cubic spline function. Theoretically, the complexity of our method is less than 5% of that of the popular baseline Scale Invariant Feature Transform (SIFT). Extensive experimental results show the superiority of the proposed feature detector over the existing representative hand-crafted and learning-based techniques in accuracy and computational time. The code and supplementary materials can be found at https://github.com/mogvision/FFD.
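The paper's key constants (blurring ratio 2 and base smoothness 0.627) slot straight into a difference-of-Gaussians response. The 1-D sketch below uses a plain truncated Gaussian kernel for discretisation, not the paper's undecimated-wavelet/cubic-spline scheme, so treat it only as an illustration of the continuous model.

```python
import numpy as np

def dog_response(signal, sigma=0.627, ratio=2.0):
    """1-D difference-of-Gaussians filter with the base smoothness and
    blurring ratio suggested by the paper."""
    def blur(x, s):
        r = int(4 * s + 1)                    # truncate kernel at ~4 sigma
        t = np.arange(-r, r + 1)
        k = np.exp(-t ** 2 / (2 * s * s))
        k /= k.sum()                          # normalise to unit mass
        return np.convolve(x, k, mode="same")
    return blur(signal, sigma) - blur(signal, ratio * sigma)
```

The response is band-pass: it peaks on blob-like structures of a scale tied to sigma, which is why extrema of this operator make stable keypoints.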

16.
IEEE Trans Med Imaging ; 39(9): 2725-2737, 2020 09.
Article in English | MEDLINE | ID: mdl-32078542

ABSTRACT

Precise characterization and analysis of corneal nerve fiber tortuosity are of great importance in facilitating the examination and diagnosis of many eye-related diseases. In this paper we propose a fully automated method for image-level tortuosity estimation, comprising image enhancement, exponential curvature estimation, and tortuosity level classification. The image enhancement component is based on an extended Retinex model, which not only corrects imbalanced illumination and improves image contrast, but also models noise explicitly to aid the removal of imaging noise. Afterwards, we take advantage of exponential curvature estimation in the 3D space of positions and orientations to measure curvature directly on the enhanced images, rather than relying on the explicit segmentation and skeletonization steps of a conventional pipeline, which usually accumulate pre-processing errors. The proposed method has been applied to two corneal nerve microscopy datasets to estimate a tortuosity level for each image. The experimental results show that it performs better than several selected state-of-the-art methods. Furthermore, we have performed manual tortuosity-level gradings of 403 corneal nerve microscopy images, and this dataset has been released for public access to facilitate further research by others in the community on the same and related topics.
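The simplest tortuosity index, arc length over chord length, conveys what is being graded. The paper's exponential curvature estimator works in a lifted position-orientation space, so the function below is only the baseline notion, not the proposed measure.

```python
import numpy as np

def tortuosity_index(points):
    """Arc-to-chord ratio of a polyline: 1.0 for a straight fibre,
    larger for more tortuous ones."""
    p = np.asarray(points, float)
    arc = np.linalg.norm(np.diff(p, axis=0), axis=1).sum()   # path length
    chord = np.linalg.norm(p[-1] - p[0])                     # endpoint distance
    return arc / chord
```

A straight nerve segment scores exactly 1.0, and any bending raises the ratio, which is the intuition behind the four tortuosity levels.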


Subject(s)
Cornea, Nerve Fibers, Cornea/diagnostic imaging, Image Enhancement, Confocal Microscopy
17.
IEEE Trans Med Imaging ; 39(2): 341-356, 2020 02.
Article in English | MEDLINE | ID: mdl-31283498

ABSTRACT

The estimation of vascular network topology in complex networks is important in understanding the relationship between vascular changes and a wide spectrum of diseases. Automatic classification of the retinal vascular trees into arteries and veins is of direct assistance to the ophthalmologist in terms of diagnosis and treatment of eye disease. However, it is challenging due to their projective ambiguity and subtle changes in appearance, contrast, and geometry in the imaging process. In this paper, we propose a novel method that is capable of making the artery/vein (A/V) distinction in retinal color fundus images based on vascular network topological properties. To this end, we adapt the concept of dominant set clustering and formalize the retinal blood vessel topology estimation and the A/V classification as a pairwise clustering problem. The graph is constructed through image segmentation, skeletonization, and identification of significant nodes. The edge weight is defined as the inverse Euclidean distance between its two end points in the feature space of intensity, orientation, curvature, diameter, and entropy. The reconstructed vascular network is classified into arteries and veins based on their intensity and morphology. The proposed approach has been applied to five public databases, namely INSPIRE, IOSTAR, VICAVR, DRIVE, and WIDE, and achieved high accuracies of 95.1%, 94.2%, 93.8%, 91.1%, and 91.0%, respectively. Furthermore, we have made manual annotations of the blood vessel topologies for INSPIRE, IOSTAR, VICAVR, and DRIVE datasets, and these annotations are released for public access so as to facilitate researchers in the community.
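The edge weight defined above — the inverse Euclidean distance between two nodes' feature vectors (intensity, orientation, curvature, diameter, entropy) — is straightforward to write down; the epsilon below is an added assumption to avoid division by zero for identical features.

```python
import numpy as np

def edge_weight(f1, f2, eps=1e-8):
    """Inverse Euclidean distance between two node feature vectors,
    so similar vessel segments are linked by heavier edges."""
    d = np.linalg.norm(np.asarray(f1, float) - np.asarray(f2, float))
    return 1.0 / (d + eps)
```

In the pairwise-clustering view, heavy edges bind segments of the same vessel tree, and dominant-set clustering then carves the graph along the light edges.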


Subject(s)
Cluster Analysis, Computer-Assisted Image Processing/methods, Retinal Artery/diagnostic imaging, Retinal Vein/diagnostic imaging, Algorithms, Factual Databases, Eye Diseases/diagnostic imaging, Fundus Oculi, Humans
18.
IEEE Trans Vis Comput Graph ; 26(6): 2204-2218, 2020 06.
Article in English | MEDLINE | ID: mdl-30530330

ABSTRACT

An importance measure of 3D objects inspired by human perception has a range of applications since people want computers to behave like humans in many tasks. This paper revisits a well-defined measure, distinction of 3D surface mesh, which indicates how important a region of a mesh is with respect to classification. We develop a method to compute it based on a classification network and a Markov Random Field (MRF). The classification network learns view-based distinction by handling multiple views of a 3D object. Using a classification network has an advantage of avoiding the training data problem which has become a major obstacle of applying deep learning to 3D object understanding tasks. The MRF estimates the parameters of a linear model for combining the view-based distinction maps. The experiments using several publicly accessible datasets show that the distinctive regions detected by our method are not just significantly different from those detected by methods based on handcrafted features, but more consistent with human perception. We also compare it with other perceptual measures and quantitatively evaluate its performance in the context of two applications. Furthermore, due to the view-based nature of our method, we are able to easily extend mesh distinction to 3D scenes containing multiple objects.

19.
Med Phys ; 46(10): 4531-4544, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31381173

ABSTRACT

BACKGROUND AND OBJECTIVE: The detection of abnormalities such as lesions or leakage from retinal images is an important health informatics task for automated early diagnosis of diabetic and malarial retinopathy and other eye diseases, in order to prevent blindness and common systemic conditions. In this work, we propose a novel retinal lesion detection method by adapting the concept of saliency. METHODS: Retinal images are first segmented into superpixels; two new saliency feature representations, uniqueness and compactness, are then derived to represent the superpixels. Pixel-level saliency is then estimated from these superpixel saliency values via a bilateral filter. The extracted saliency features form a matrix for low-rank analysis to achieve saliency detection. The precise contour of a lesion is finally extracted from the generated saliency map after removing confounding structures such as blood vessels, the optic disk, and the fovea. The main novelty of this method is that it is an effective tool for detecting different abnormalities at the pixel level from different modalities of retinal images, without the need to tune parameters. RESULTS: To evaluate its effectiveness, we have applied our method to seven public datasets of diabetic and malarial retinopathy with four different types of lesions: exudate, hemorrhage, microaneurysms, and leakage. The evaluation was undertaken at the pixel, lesion, or image level according to ground truth availability in these datasets. CONCLUSIONS: The experimental results show that the proposed method outperforms existing state-of-the-art ones in applicability, effectiveness, and accuracy.
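The "uniqueness" representation can be sketched as each superpixel's feature distance to all others, down-weighted by spatial distance — a common formulation in saliency work; the exact weighting and the sigma value below are assumptions, not necessarily the paper's.

```python
import numpy as np

def uniqueness(features, positions, sigma=0.25):
    """Per-superpixel uniqueness: squared feature distance to every other
    superpixel, weighted by a Gaussian of their spatial distance."""
    f = np.asarray(features, float)
    p = np.asarray(positions, float)
    fd = np.sum((f[:, None] - f[None, :]) ** 2, axis=-1)   # feature distances
    pd = np.sum((p[:, None] - p[None, :]) ** 2, axis=-1)   # spatial distances
    w = np.exp(-pd / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)                      # rows sum to one
    return (w * fd).sum(axis=1)
```

A lesion superpixel that looks unlike its neighbourhood scores high, which is what makes uniqueness a lesion cue once vessels and the optic disk are masked out.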


Subject(s)
Computer-Assisted Image Processing/methods, Molecular Imaging, Retina/diagnostic imaging, Automation, Blood Vessels/diagnostic imaging
20.
IEEE Trans Med Imaging ; 37(2): 438-450, 2018 02.
Article in English | MEDLINE | ID: mdl-28952938

ABSTRACT

Automated detection of vascular structures is of great importance in understanding the mechanism, diagnosis, and treatment of many vascular pathologies. However, automatic vascular detection continues to be an open issue because of difficulties posed by multiple factors, such as poor contrast, inhomogeneous backgrounds, anatomical variations, and the presence of noise during image acquisition. In this paper, we propose a novel 2-D/3-D symmetry filter to tackle these challenging issues for enhancing vessels from different imaging modalities. The proposed filter not only considers local phase features by using a quadrature filter to distinguish between lines and edges, but also uses the weighted geometric mean of the blurred and shifted responses of the quadrature filter, which allows more tolerance of vessels with irregular appearance. As a result, this filter shows a strong response to the vascular features under typical imaging conditions. Results based on eight publicly available datasets (six 2-D data sets, one 3-D data set, and one 3-D synthetic data set) demonstrate its superior performance to other state-of-the-art methods.


Subject(s)
Algorithms, Angiography/methods, Three-Dimensional Imaging/methods, Multimodal Imaging/methods, Factual Databases, Humans, Retinal Vessels/diagnostic imaging